
Revised architecture


Architecture of the full system, with details on each component, and the Minimum Viable Product (MVP) architecture.


Full Architecture

Below is the further developed design of the full system architecture, as of this point in the technical assessment:

[Architecture diagram: Event-Driven Middleware (Kafka), Data Ingestion Service with Network Producers, Data Processing Service (Processor), Data Storage Service (Query API, Data Sink, Data Governance, Time Series DB, Analytics DB), ML Service (Communication Interface, Inference, Trainer, Performance Monitor, Model Registry, Artifact Storage), Decision & Risk Management Service with Network Consumers, Policy Service (Policy DB), Authentication, Reverse Proxy, Dashboard, and Logger & Audit (Monitoring & Logging, Dashboard, System Monitor)]

Component Details

The components of the architecture are described below.

Processor (Data Processing Service)
Fetches raw data from the Storage Service, processes it according to defined profiles, and sends the processed data back to Storage, where it is written to the appropriate database. It also receives data from Kafka topics, applies processing logic, including windowing strategies, and outputs data for use by the Machine Learning Service.
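
As an illustration of this flow, the sketch below consumes raw measurements from a Kafka topic, aggregates them over a simple tumbling window, and republishes the result. The topic names, the throughput field, the window size, and the use of the plain kafka-python client are assumptions for illustration only and do not reflect the actual processing profiles.

```python
# Minimal sketch (assumed topic and field names, plain kafka-python client):
# consume raw measurements, aggregate them over 10-second tumbling windows,
# and republish the aggregates for downstream use.
import json
from collections import defaultdict
from kafka import KafkaConsumer, KafkaProducer

WINDOW_MS = 10_000  # tumbling window size; an assumption, not a real profile

consumer = KafkaConsumer(
    "raw-measurements",                       # hypothetical input topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

windows = defaultdict(list)  # window start (ms) -> samples collected so far

for msg in consumer:
    window_start = (msg.timestamp // WINDOW_MS) * WINDOW_MS
    windows[window_start].append(msg.value.get("throughput_mbps", 0.0))

    # Emit and drop every window older than the one currently being filled.
    for start in [w for w in windows if w < window_start]:
        samples = windows.pop(start)
        producer.send("processed-measurements", {  # hypothetical output topic
            "window_start_ms": start,
            "avg_throughput_mbps": sum(samples) / len(samples),
            "sample_count": len(samples),
        })
```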

Ingestion (Data Ingestion Service)
Primary entry point for external network data (it can be seen as a data broker for the inner system), including support for real-time streaming of system status via WebSocket. It receives raw network measurements and publishes them to Kafka topics for storage and processing. It also supports batch intake of data, making it flexible for various network data sources and collection methods.
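
A minimal sketch of such an entry point is shown below, assuming a FastAPI-style HTTP endpoint for batch intake and a WebSocket endpoint for status, neither of which is specified by this document; the paths, topic name, and kafka-python client are likewise assumptions.

```python
# Minimal sketch (assumed endpoint paths, topic name, FastAPI + kafka-python):
# accept a batch of measurements over HTTP, publish them to Kafka, and expose
# a trivial WebSocket status stream.
import json
from fastapi import FastAPI, WebSocket
from kafka import KafkaProducer

app = FastAPI()
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

@app.post("/measurements")                 # hypothetical ingestion endpoint
def ingest(measurements: list[dict]):
    for m in measurements:
        producer.send("raw-measurements", m)  # hypothetical topic
    return {"accepted": len(measurements)}

@app.websocket("/ws/status")               # hypothetical status stream
async def status(ws: WebSocket):
    await ws.accept()
    await ws.send_json({"service": "ingestion", "status": "up"})
    await ws.close()
```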

Storage (Data Storage Service: Query API, Data Sink, Data Governance, Time Series DB, Analytics DB)
Manages persistent storage using a dual-database approach, with InfluxDB for time-series data and ClickHouse for columnar analytics queries. It integrates with Kafka for data streaming and supports both historical analysis and real-time query patterns.
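
The sketch below illustrates the dual-database idea: one time-series write to InfluxDB and one columnar aggregation against ClickHouse. The bucket, measurement, table, and column names are assumptions, not the project's actual schema.

```python
# Minimal sketch (assumed bucket, measurement, table, and column names):
# write one point to InfluxDB and run a columnar aggregation on ClickHouse.
import clickhouse_connect
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Time-series write (InfluxDB 2.x client).
influx = InfluxDBClient(url="http://localhost:8086", token="dev-token", org="dev")
write_api = influx.write_api(write_options=SYNCHRONOUS)
point = (
    Point("network_measurement")            # hypothetical measurement name
    .tag("cell_id", "cell-42")
    .field("throughput_mbps", 87.5)
)
write_api.write(bucket="raw-measurements", record=point)  # hypothetical bucket

# Columnar analytics query (ClickHouse).
ch = clickhouse_connect.get_client(host="localhost")
result = ch.query(
    "SELECT cell_id, avg(throughput_mbps) AS avg_tp "
    "FROM measurements GROUP BY cell_id"                  # hypothetical table
)
for cell_id, avg_tp in result.result_rows:
    print(cell_id, avg_tp)
```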

Machine Learning (ML Service: Communication Interface, Inference, Trainer, Performance Monitor, Model Registry, Artifact Storage)
Hosts the Machine Learning (ML) components of the network analytics pipeline and defines standardized patterns and guidelines for implementing and adding ML models.
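
One way to express such a standardized pattern is a common model interface; the sketch below is an assumed contract, not the service's actual API, and every class and method name is invented for illustration.

```python
# Minimal sketch of a standardized model interface (all names are assumptions):
# every model implements the same train / predict / save contract so it can be
# plugged into the pipeline and recorded in the Model Registry uniformly.
from abc import ABC, abstractmethod
from typing import Any

class BaseNetworkModel(ABC):
    """Common contract a model in the ML service could follow."""

    name: str = "base"
    version: str = "0.0.1"

    @abstractmethod
    def train(self, samples: list[dict[str, Any]]) -> None:
        """Fit the model on processed measurements."""

    @abstractmethod
    def predict(self, sample: dict[str, Any]) -> dict[str, Any]:
        """Return an inference for a single processed sample."""

    @abstractmethod
    def save(self, artifact_dir: str) -> str:
        """Persist the trained model and return the artifact path to be
        recorded in the Model Registry / Artifact Storage."""
```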

Decision (Decision & Risk Management Service)
Receives inferences from the ML Service, makes decisions, and assesses the risk of those decisions. Validated decisions are then sent to the Network Consumers subscribed to the respective topic.
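
The sketch below shows this loop in miniature: consume inferences, attach a toy risk score, and forward only the validated decisions. The topic names, message fields, threshold, and risk formula are all assumptions for illustration.

```python
# Minimal sketch (assumed topics, fields, and threshold): consume ML
# inferences, score their risk, and forward only validated decisions to the
# topic that Network Consumers subscribe to.
import json
from kafka import KafkaConsumer, KafkaProducer

RISK_THRESHOLD = 0.7  # assumption: decisions riskier than this are dropped

consumer = KafkaConsumer(
    "ml-inferences",                        # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for msg in consumer:
    inference = msg.value
    # Toy risk model: low model confidence means high risk.
    risk = 1.0 - inference.get("confidence", 0.0)
    if risk <= RISK_THRESHOLD:
        producer.send("validated-decisions", {  # hypothetical topic
            "action": inference.get("recommended_action"),
            "risk": risk,
        })
```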

Frontend (Reverse Proxy, Dashboard)
Includes improved real-time data visualization, intuitive monitoring interfaces, and ML model control panels, giving operators effective tools for network analytics and resource management. It sits behind the NGINX Reverse Proxy component, which conceals the internal system's addresses.

Auth (Authentication)
Keycloak-based authentication component that controls access to role-specific features of the system.
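
As an illustration of how a client could authenticate, the sketch below requests an access token from Keycloak's OpenID Connect token endpoint and uses it to call a protected API. The realm, client, credentials, protected URL, and the use of the direct password grant are assumptions for illustration only.

```python
# Minimal sketch (assumed realm, client, credentials, and protected URL):
# obtain an access token from Keycloak and call a protected endpoint with it.
import requests

KEYCLOAK_URL = "http://localhost:8080"
REALM = "network-analytics"                # hypothetical realm

resp = requests.post(
    f"{KEYCLOAK_URL}/realms/{REALM}/protocol/openid-connect/token",
    data={
        "grant_type": "password",          # for illustration only
        "client_id": "dashboard",          # hypothetical client
        "username": "operator",
        "password": "operator-password",
    },
)
token = resp.json()["access_token"]

api = requests.get(
    "http://localhost/api/query",          # hypothetical protected endpoint
    headers={"Authorization": f"Bearer {token}"},
)
print(api.status_code)
```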

Logs (Logger & Audit: Monitoring & Logging, Dashboard, System Monitor)
Logging and monitoring system for tracking the status and performance of all components.

Consumer (Network Consumers, Decision & Risk Management Service)
Consumer component that subscribes to a topic and receives the decisions that were previously validated.

Producer (Network Producers, Data Ingestion Service)
Generates synthetic network data using the DoNext 5G/6G measurement dataset from the research of Schippers et al. This module extracts and processes network measurement data to provide realistic test scenarios for the entire pipeline. It plays a crucial role in development, testing, and demonstration by providing consistent, reproducible network data that simulates real-world telecommunications scenarios.
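
A replay-style producer could look like the sketch below, which streams rows from a DoNext-style CSV export into Kafka. The file name, pacing, topic, and the plain kafka-python client are assumptions, and the real module's extraction and processing steps are not shown.

```python
# Minimal sketch (assumed file name, topic, and pacing): replay rows from a
# DoNext-style CSV export as a synthetic measurement stream.
import csv
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

with open("donext_measurements.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        producer.send("raw-measurements", row)           # hypothetical topic
        time.sleep(0.01)  # crude pacing to emulate a live feed
```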

Kafka (Event-Driven Middleware)
Uses Apache Kafka as the message broker. It relates to the PyKafBridge module¹, which handles Kafka producer/consumer operations, topic management, and message binding. This component enables asynchronous communication between all system components, serving as the backbone for real-time data streaming throughout the network analytics pipeline.
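
For completeness, the sketch below creates a set of topics with the plain kafka-python admin client rather than PyKafBridge; the topic names and settings are assumptions carried over from the earlier sketches.

```python
# Minimal sketch (assumed topic names and settings, kafka-python admin client
# rather than PyKafBridge): create the topics used in the sketches above.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")
admin.create_topics([
    NewTopic(name="raw-measurements", num_partitions=3, replication_factor=1),
    NewTopic(name="processed-measurements", num_partitions=3, replication_factor=1),
    NewTopic(name="ml-inferences", num_partitions=1, replication_factor=1),
    NewTopic(name="validated-decisions", num_partitions=1, replication_factor=1),
])
```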

Policy (Policy Service, Policy DB)
Policy enforcement and management service that controls what is and is not used for processing and inference-making.
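
A toy version of such a check is sketched below; the policy fields and data sources are invented, and the real service would read its rules from the Policy DB rather than an in-memory dictionary.

```python
# Minimal sketch (assumed policy fields and sources): an in-memory policy check
# deciding whether a data source and feature may be used for processing and
# inference-making.
POLICIES = {
    # Hypothetical entries; the real service would load these from Policy DB.
    "cell_metrics": {"allowed": True, "features": {"throughput_mbps", "latency_ms"}},
    "user_traces": {"allowed": False, "features": set()},
}

def is_allowed(source: str, feature: str) -> bool:
    policy = POLICIES.get(source)
    return bool(policy and policy["allowed"] and feature in policy["features"])

print(is_allowed("cell_metrics", "latency_ms"))  # True
print(is_allowed("user_traces", "latency_ms"))   # False
```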

MVP Architecture

As decided for the Construction phase of development, below is the view of the architecture for the Minimum Viable Product in its core stage:

[MVP architecture diagram: the same components as the full architecture, with those deferred to the next development step greyed out; see "Why?" below]

Why?

One may question why these components were chosen and not some of the greyed-out ones. Although only a small number of components appear to be missing, they belong to the next development step, which targets data privacy and security, access control, fully fledged logging, and management of the decisions that are sent out to network consumers. These will be the main focus of the project over the coming months, with the goal of reaching a state-of-the-art system.

However, the project in its current state could already be deployed in production for real-life scenarios, given the implementation and internal modularity already in place to ease adaptation of the system to any given context.

Footnotes

  1. For more information see: Repository Division